PAC Learning with Simple Examples

Authors

  • François Denis
  • Cyrille D'Halluin
  • Rémi Gilleron
Abstract

We define a new PAC learning model. In this model, examples are drawn according to the universal distribution m(· | f) of Solomonoff-Levin, where f is the target concept. The consequence is that the simple examples of the target concept have a high probability of being provided to the learning algorithm. We prove an Occam's Razor theorem. We show that the class of poly-term DNF is learnable, and the class of k-reversible languages is learnable from positive data, in this new model.
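A quick way to see why simple examples dominate in this model (a sketch based on the standard coding theorem for the Solomonoff-Levin distribution, not a statement taken from the paper itself): up to a multiplicative constant,

\[
  \mathbf{m}(x \mid f) = 2^{-K(x \mid f) + O(1)},
\]

where K(x | f) is the (prefix) Kolmogorov complexity of the example x given the target concept f. Examples that are easy to describe from f therefore carry probability bounded below by a constant times 2^{-K(x | f)}, so a sample drawn from m(· | f) is likely to contain them.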


Similar articles

Learning DFA from Simple Examples

We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uni...
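For context, the usual formalization of a "simple distribution" in this line of work (following Li and Vitányi; the truncated snippet above does not state it explicitly) is a distribution multiplicatively dominated by the universal distribution:

\[
  D \ \text{is simple} \iff \exists c > 0\ \forall x:\ D(x) \le c \cdot \mathbf{m}(x).
\]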


Boosting a Simple Weak Learner For Classifying Handwritten Digits

A weak PAC learner is one which takes labeled training examples and produces a classifier which can label test examples more accurately than random guessing. A strong learner (also known as a PAC learner), on the other hand, is one which takes labeled training examples and produces a classifier which can label test examples arbitrarily accurately. Schapire has constructively proved that a stron...
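Stated as inequalities (a standard textbook formulation, not a quotation from this paper): a weak learner outputs, with probability at least 1 - \delta, a hypothesis h satisfying

\[
  \Pr_{x \sim D}\bigl[h(x) \ne f(x)\bigr] \le \tfrac{1}{2} - \gamma
\]

for some fixed advantage \gamma > 0, whereas a strong (PAC) learner must achieve error at most \varepsilon for every requested \varepsilon > 0, using a number of examples polynomial in 1/\varepsilon and 1/\delta.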


PAC Learning under Helpful Distributions

A PAC model under helpful distributions is introduced. A teacher associates a teaching set with each target concept and we only consider distributions such that each example in the teaching set has a non-zero weight. The performance of a learning algorithm depends on the probabilities of the examples in this teaching set. In this model, an Occam's razor theorem and its converse are proved. The ...
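For comparison, the classical Occam's Razor bound in the standard PAC model (the helpful-distribution variant proved in the paper above differs in its details) states that any hypothesis from a finite class H consistent with

\[
  m \ge \frac{1}{\varepsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)
\]

i.i.d. examples has true error at most \varepsilon with probability at least 1 - \delta.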


A Note on Learning from Multiple-Instance Examples (Avrim Blum)

We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also de...


Smart PAC-Learners

The PAC-learning model is distribution-independent in the sense that the learner must reach a learning goal with a limited number of labeled random examples without any prior knowledge of the underlying domain distribution. In order to achieve this, one needs generalization error bounds that are valid uniformly for every domain distribution. These bounds are (almost) tight in the sense that the...
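The uniform bounds alluded to here are typically of the VC type; one representative form (the constant c and the exact logarithmic term vary across sources) bounds the error of every hypothesis h consistent with m examples, from a class of VC dimension d, by

\[
  \Pr_{x \sim D}\bigl[h(x) \ne f(x)\bigr] \le \frac{c}{m}\left(d \ln \frac{2m}{d} + \ln \frac{1}{\delta}\right),
\]

uniformly over all domain distributions D.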




Publication date: 1996